⚡️ Speed up function get_first_top_level_function_or_method_ast by 39% in PR #769 (clean-async-branch)
#781
⚡️ This pull request contains optimizations for PR #769

If you approve this dependent PR, these changes will be merged into the original PR branch `clean-async-branch`.

📄 39% (0.39x) speedup for `get_first_top_level_function_or_method_ast` in `codeflash/code_utils/static_analysis.py`

⏱️ Runtime: 1.43 milliseconds → 1.03 milliseconds (best of 61 runs)

📝 Explanation and details
The optimized code achieves a 38% speedup through several key micro-optimizations in AST traversal:
Primary optimizations:
- **Reduced tuple allocation overhead:** Moving `skip_types = (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)` to a local variable eliminates repeated tuple construction on each function call (128 calls show 0.5% overhead vs. the previous inline tuple creation).
- **Improved iterator efficiency:** Converting `ast.iter_child_nodes(node)` to `list(ast.iter_child_nodes(node))` up front provides better cache locality and eliminates generator overhead during iteration, though this comes with a memory trade-off.
- **Optimized control flow:** Restructuring the `isinstance` checks to handle the common case (finding a matching `object_type`) first, then using early `continue` statements to skip unnecessary processing, reduces the total number of `isinstance` calls from ~14,000 to ~11,000.
- **Eliminated walrus-operator complexity:** Simplifying the `class_node` assignment in `get_first_top_level_function_or_method_ast` removes the complex conditional expression, making the code path more predictable.

Performance characteristics:
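The first three optimizations can be sketched together in a simplified traversal helper. This is an illustrative reconstruction, not the actual code from `codeflash/code_utils/static_analysis.py`; the function name `find_first` and its signature are hypothetical.

```python
import ast

def find_first(tree, object_type=ast.FunctionDef):
    # Hoist the tuple into a local: built once per call, not rebuilt
    # on every loop iteration.
    skip_types = (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)
    stack = [tree]
    while stack:
        node = stack.pop()
        # Materialize children up front: one pass over the generator,
        # then plain list iteration with better locality afterwards.
        children = list(ast.iter_child_nodes(node))
        for child in children:
            # Common case first: return as soon as the target type matches.
            if isinstance(child, object_type):
                return child
            # Early continue: skip nested scopes without further checks.
            if isinstance(child, skip_types):
                continue
            stack.append(child)
    return None

tree = ast.parse("x = 1\ndef f():\n    pass\n")
node = find_first(tree)
print(node.name)  # prints: f
```

Ordering the `isinstance` checks so the match test runs before the skip test is what trims the total check count: most callers are looking for the very node types that would otherwise be skipped.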
The line profiler shows the optimized version spends more time in the initial list conversion (49.9% vs 46% in the original iterator), but this is offset by faster subsequent processing of the child nodes.
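The trade-off described above can be observed with a small stand-alone micro-benchmark (this is illustrative, not the project's profiler output): the list version pays its cost in one up-front conversion, the generator version spreads it across the loop.

```python
import ast
import timeit

# A module with many top-level statements to iterate over.
src = "\n".join(f"x{i} = {i}" for i in range(500))
tree = ast.parse(src)

def via_generator():
    # Consume the child-node generator directly.
    count = 0
    for child in ast.iter_child_nodes(tree):
        if isinstance(child, ast.Assign):
            count += 1
    return count

def via_list():
    # Materialize the children first, then iterate the plain list.
    count = 0
    for child in list(ast.iter_child_nodes(tree)):
        if isinstance(child, ast.Assign):
            count += 1
    return count

# Both strategies must agree on the result.
assert via_generator() == via_list() == 500
print("generator:", timeit.timeit(via_generator, number=200))
print("list:     ", timeit.timeit(via_list, number=200))
```

Which variant wins depends on child count and what the loop body does; the profiler numbers quoted above are what justified the list conversion for this particular function.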
✅ Correctness verification report:
🌀 Generated Regression Tests and Runtime
To edit these changes, run `git checkout codeflash/optimize-pr769-2025-09-27T03.15.10` and push.